    A Language and Methodology based on Scenarios, Grammars and Views, for Administrative Business Processes Modelling

    In Business Process Management (BPM), process modelling has been addressed in various ways, but there is still no commonly accepted modelling language. Some languages are criticized for their inability to capture the lifecycle, informational, and organizational models of processes; in others, a process is generally modelled as a single graph, which hinders modularity, maintenance, and scalability. In addition, some of these languages are very general, so applying them to domain-specific processes (such as administrative processes) is very complex. In this paper, we present a new language and a new methodology dedicated to administrative process modelling. The language is based on a variant of attributed grammars and is able to capture the lifecycle, informational, and organizational models of such processes. It also proposes a simple graphical formalism that models each process execution scenario as an annotated tree (modularity). Particular emphasis is put on modelling, using "views", the perceptions that actors have of processes and their data.
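
    As a minimal illustration of the annotated-tree and "views" ideas described above, the Python sketch below models a scenario tree whose nodes carry annotations, and projects it onto the subset of annotations a given actor may see. All names and annotation keys are illustrative assumptions, not the paper's formalism.

    # Sketch only: a scenario as an annotated tree, plus a per-actor "view"
    # that filters annotations. Names and keys are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        annotations: dict                  # e.g. {"form": "F-12", "owner": "registrar"}
        children: list = field(default_factory=list)

    def view(task, visible):
        """Project a scenario tree onto the annotations an actor may see."""
        return Task(task.name,
                    {k: v for k, v in task.annotations.items() if k in visible},
                    [view(c, visible) for c in task.children])

    scenario = Task("enrol-student", {"form": "F-12", "owner": "registrar"},
                    [Task("check-transcript", {"owner": "faculty"}),
                     Task("issue-card", {"owner": "registrar", "fee": "10"})])

    print(view(scenario, {"owner"}))       # the organizational view of the tree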

    Synthesis of space-time optimal systolic algorithms for the Cholesky factorization

    In this paper we study the synthesis of space-time optimal systolic arrays for the Cholesky Factorization (CF). First, we discuss previous allocation methods and their application to CF. Second, stemming from a new allocation method, we derive a space-time optimal array with nearest-neighbor connections that requires 3N + Θ(1) time steps and N^2/8 + Θ(N) processors, where N is the size of the problem. The number of processors required by this new design improves the best previously known bound, N^2/6 + Θ(N), induced by previous allocation methods. This is the first contribution of the paper. The second contribution is the new allocation method itself, which suggests first performing clever index transformations on the initial dependence graph of a given system of uniform recurrent equations before applying the weakest allocation method, the projection method.
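
    To make the allocation-method vocabulary concrete, the toy sketch below applies the classical projection method to the Cholesky iteration domain {(i, j, k) : 1 <= k <= j <= i <= N} under the textbook linear schedule t(i, j, k) = i + j + k. The schedule and projection direction are illustrative assumptions; they yield roughly N^2/2 cells, not the paper's optimal N^2/8 + Θ(N) design.

    # Projection allocation on the Cholesky dependence domain (illustration).
    def schedule(i, j, k): return i + j + k          # linear timing function
    def allocate(i, j, k): return (i - k, j - k)     # projection along (1, 1, 1)

    N = 6
    cells = {}
    for i in range(1, N + 1):
        for j in range(1, i + 1):
            for k in range(1, j + 1):
                cells.setdefault(allocate(i, j, k), []).append(schedule(i, j, k))

    # Space-time validity: each cell executes at most one point per time step.
    assert all(len(ts) == len(set(ts)) for ts in cells.values())
    print(len(cells), "cells, last time step", max(max(ts) for ts in cells.values()))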

    Exécution d'un graphe cubique de tùches sur un réseau bi-dimensionnel et asymptotiquement optimal

    This work proposes a scheduling strategy, based on re-indexing transformations, for task graphs associated with a linear timing function. The strategy is used to execute a cubical task graph, in which all tasks have the same execution time and inter-task communication delays are neglected, on a two-dimensional array of processors that is asymptotically space-optimal with respect to the timing function. This result improves the best previously known bound.
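
    A quick way to see what space-optimality means here: under the linear schedule t(i, j, k) = i + j + k, any processor array needs at least as many cells as the busiest time step has tasks. The sketch below (an illustrative check, not the paper's construction) measures that width for a cubical graph and compares it with the ~3N^2/4 asymptote a space-optimal two-dimensional array must approach.

    # Width of the cubical task graph [0, N)^3 under t(i, j, k) = i + j + k.
    from collections import Counter

    N = 16
    width = Counter(i + j + k for i in range(N) for j in range(N) for k in range(N))
    print(max(width.values()), "tasks in the busiest step; ~3N^2/4 =", 3 * N * N // 4)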

    Amélioration du raisonnement dans les solveurs SAT CDCL avec la rÚgle d'extension

    The extension rule, first introduced by G. Tseitin, is a simple but powerful rule that, when added to resolution, leads to an exponentially stronger proof system known as extended resolution (ER). Despite the outstanding theoretical results obtained with ER, exploiting it in practice to improve SAT solvers' efficiency still poses challenging issues. There have been several attempts in the literature to integrate the extension rule within CDCL SAT solvers, but the results are in general not as promising as the theory suggests. An important remark about these attempts is that most of them focus on reducing proof sizes using the extended variables introduced in the solver. We adopt a different view in this work: we see extended variables as a means to enhance reasoning in solvers, giving them the ability to reason on various semantic aspects of variables. Experiments carried out on the benchmarks of the 2018 and 2020 SAT competitions show that using the extension rule in CDCL SAT solvers is practically beneficial for both satisfiable and unsatisfiable instances.
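
    For reference, the extension rule itself is small: introduce a fresh variable z defined as z <-> (a \/ b) over existing literals, encoded as three clauses. The sketch below shows that CNF-level step with DIMACS-style integer literals (negative means negated); how a CDCL solver then branches and learns on z is where the approaches discussed above differ.

    # Tseitin's extension rule as a CNF transformation (illustration).
    def extend(cnf, a, b, fresh):
        """Add z <-> (a \/ b) with z = fresh; returns the extended formula."""
        z = fresh
        return cnf + [[-z, a, b],   # z -> (a \/ b)
                      [z, -a],      # a -> z
                      [z, -b]]      # b -> z

    cnf = [[1, 2], [-1, 3]]                  # some input clauses over vars 1..3
    cnf = extend(cnf, a=1, b=-2, fresh=4)    # reason on the new literal 4
    print(cnf)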

    Parallel Hybridization for SAT: An Efficient Combination of Search Space Splitting and Portfolio

    Search space splitting and portfolio are the two main approaches used in parallel SAT solving. Each has strengths but also weaknesses: the decomposition in search space splitting can improve speedup on satisfiable instances, while the competition in a portfolio increases robustness. Many parallel hybrid approaches have been proposed in the literature, but most of them still suffer from load-balancing issues that cause non-negligible overhead. In this paper, we describe a new parallel hybridization scheme, based on both search space splitting and portfolio, that does not require load-balancing mechanisms (such as dynamic work stealing).
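
    A minimal orchestration sketch of the general idea (not the paper's system): statically split the search space on d variables into 2^d cubes, then run a small competing portfolio on every cube, so no cube is ever left waiting for stolen work. The solve() function is a stub standing in for a real CDCL solver run under assumptions; solver names are placeholders.

    # Static splitting x portfolio, with no dynamic load balancing (sketch).
    from concurrent.futures import ProcessPoolExecutor
    from itertools import product

    def solve(cnf, assumptions, config):
        """Stub for a CDCL solver invoked under cube assumptions."""
        return ("UNSAT", assumptions, config)            # placeholder result

    def hybrid(cnf, split_vars, portfolio):
        cubes = [[v if s else -v for v, s in zip(split_vars, signs)]
                 for signs in product([False, True], repeat=len(split_vars))]
        with ProcessPoolExecutor() as pool:
            futures = [pool.submit(solve, cnf, cube, cfg)
                       for cube in cubes for cfg in portfolio]
            results = [f.result() for f in futures]
        # SAT on any cube implies SAT; UNSAT must hold on every cube.
        if any(r[0] == "SAT" for r in results):
            return "SAT"
        return "UNSAT" if all(r[0] == "UNSAT" for r in results) else "UNKNOWN"

    if __name__ == "__main__":
        print(hybrid([[1, 2], [-1, 2]], split_vars=[1, 2],
                     portfolio=["config-A", "config-B"]))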

    A Hybrid Algorithm Based on Multi-colony Ant Optimization and Lin-Kernighan for solving the Traveling Salesman Problem

    In this article, a hybrid heuristic algorithm is proposed to solve the Traveling Salesman Problem (TSP). The algorithm combines two metaheuristics: multi-colony ant colony optimization (MACO) and Lin-Kernighan-Helsgaun (LKH). The proposed hybrid approach (MACO-LKH) uses so-called insertion and relay hybridization and brings two major innovations. The first consists in replacing the static visibility function used in the MACO heuristic by the dynamic visibility function used in LKH; this avoids long paths and favors the choice of shorter paths earlier, hence the term insertion hybridization. The second consists in replacing the pheromone update strategy of MACO by the dynamic λ-opt mechanisms of LKH in order to improve the generated solutions and save execution time, hence the relay hybridization. The significance of the hybridization is examined and validated on benchmark instances, including small, medium, and large problems taken from the TSPLIB library. The results are compared with four other state-of-the-art metaheuristic approaches, which the proposed algorithm significantly outperforms in terms of solution quality and execution time.
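
    A toy sketch of the two ingredients being hybridized: ant-colony tour construction biased by pheromone and visibility (1/distance), followed by a 2-opt local improvement standing in for LKH's far stronger λ-opt moves. The single colony and all parameters are illustrative assumptions, not MACO-LKH itself.

    # Ant construction + 2-opt improvement on a random TSP instance (sketch).
    import random, math

    pts = [(random.random(), random.random()) for _ in range(30)]
    n = len(pts)
    d = [[math.dist(pts[i], pts[j]) or 1e-9 for j in range(n)] for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]                  # pheromone trails

    def tour_length(t):
        return sum(d[t[i]][t[(i + 1) % n]] for i in range(n))

    def construct(alpha=1.0, beta=3.0):                  # pheromone x visibility
        tour, left = [0], set(range(1, n))
        while left:
            i = tour[-1]
            w = [(tau[i][j] ** alpha) * ((1 / d[i][j]) ** beta) for j in left]
            tour.append(random.choices(list(left), weights=w)[0])
            left.remove(tour[-1])
        return tour

    def two_opt(t):                                      # stand-in for lambda-opt
        improved = True
        while improved:
            improved = False
            for i in range(1, n - 1):
                for j in range(i + 1, n):
                    cand = t[:i] + t[i:j][::-1] + t[j:]
                    if tour_length(cand) < tour_length(t):
                        t, improved = cand, True
        return t

    best = None
    for _ in range(20):                                  # ant iterations
        t = two_opt(construct())
        if best is None or tour_length(t) < tour_length(best):
            best = t
        for i in range(n):                               # reinforce the best tour
            a, b = best[i], best[(i + 1) % n]
            tau[a][b] = tau[b][a] = 0.9 * tau[a][b] + 1.0 / tour_length(best)

    print(round(tour_length(best), 3))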

    A data security and privacy scheme for user quality of experience in a Mobile Edge Computing-based network

    Cloud computing has been widely used for applications that require huge computational and data storage resources. Unfortunately, with the advent of new technologies such as the fifth generation of cellular networks and the applications it enables, like the IoT, cloud computing presents several limits, among which End-To-End (E2E) latency is the main challenge: it degrades the scenarios that require low latency. Mobile Edge Computing (MEC) has been proposed to solve this issue. MEC brings computing and storage resources from the cloud data center to edge data centers, closer to end-user equipment, to reduce the E2E latency of request processing. However, MEC raises security, data privacy, and authentication issues that affect the end-user Quality of Experience (QoE). It is therefore fundamental to address these challenges to avoid poor user experience due to a lack of security or data privacy. In this paper, we propose a hybrid cryptographic system that combines symmetric and asymmetric cryptography to improve data security, privacy, and user authentication in a MEC-based network. We show that our proposed scheme is secure by validating it with the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool. Simulation results show that our solution consumes fewer computing resources.
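
    A minimal sketch of the generic hybrid pattern such schemes build on, using the pyca/cryptography package: a symmetric session key (AES-GCM) encrypts and authenticates the payload, and an asymmetric key (RSA-OAEP) wraps the session key for the edge server. The paper's actual scheme also covers user authentication and MEC-specific exchanges, which this sketch does not model.

    # Hybrid encryption: AES-GCM payload, RSA-OAEP key wrapping (sketch).
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Edge server's long-term asymmetric key pair.
    edge_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # User equipment: encrypt the payload symmetrically, wrap the session key.
    session_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)                     # GCM also authenticates the data
    ciphertext = AESGCM(session_key).encrypt(nonce, b"user request", b"user-42")
    wrapped_key = edge_key.public_key().encrypt(session_key, oaep)

    # Edge data center: unwrap the key, then decrypt and verify integrity.
    key = edge_key.decrypt(wrapped_key, oaep)
    assert AESGCM(key).decrypt(nonce, ciphertext, b"user-42") == b"user request"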

    The prediction of good physicians for prospective diagnosis using data mining

    This work provides a predictive model for selecting the most appropriate health care practitioners, particularly physicians, to diagnose a patient. In the context of a multidisciplinary diagnosis, this paper provides a data mining model to identify specialist physicians who can participate in such a diagnosis and thus reduce the risk of errors. First, the model identifies the specialists who can diagnose a patient. Second, it uses the calculated probabilities to rank the specialist physicians capable of making a good diagnosis; this ranking can be used to construct the group of specialists participating in the multidisciplinary diagnosis. A sample of 58,177 patients (52% women) consulted by 11 different medical specialists was extracted from the SPARCS database. The work is based on the analysis of open health data, specifically diseases that keep patients stable. The result of the data mining is a multinomial logistic regression model. The 10-fold cross-validation results indicate that the model provides good predictive capability for the selected data, with an average accuracy, sensitivity, specificity, and precision of 80%, 79%, 97.3%, and 82.8%, respectively. Our results show that a patient's characteristics influence the selection of a physician. In conclusion, we assert that all selected specialists are able to diagnose the patient and that some specialists have a greater ability to diagnose the disease than others. Keywords: Data mining, Open data, Logistic regression, Multidisciplinary diagnosis
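
    The modelling setup described above can be reproduced schematically with scikit-learn: a multinomial logistic regression evaluated by 10-fold cross-validation, whose predicted probabilities yield the specialist ranking. The data below is synthetic stand-in data, since the SPARCS extract itself is not reproduced here.

    # Multinomial logistic regression with 10-fold CV on stand-in data (sketch).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 6))          # patient features (age, sex, ...)
    y = rng.integers(0, 11, size=1000)      # one of 11 medical specialties

    model = LogisticRegression(max_iter=1000)   # softmax over the 11 specialties
    print(cross_val_score(model, X, y, cv=10, scoring="accuracy").mean())

    # Ranking: predict_proba orders the specialists for a given patient.
    probs = model.fit(X, y).predict_proba(X[:1])
    print(np.argsort(probs[0])[::-1])       # specialties, best candidate first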

    Efficient mining of intra-periodic frequent sequences

    Frequent Sequence Mining (FSM) is a fundamental task in data mining. Although FSM algorithms extract frequent patterns, they cannot discover patterns that appear periodically in the data. Yet periodic trends are found in many areas, such as market basket analysis, where discovering itemsets periodically purchased by customers can help understand periodic customer behavior. This is the task of Periodic Frequent Pattern Mining (PFPM). A major limitation common to traditional PFPM algorithms is that they measure periodicity only between non-disjoint itemsets and do not take into account the periods between disjoint itemsets. Thus, they find itemsets that appear periodically, but fail to find a periodic appearance of distinct itemsets. To address this limitation, this paper extends the traditional FSM problem with intra-periodicity and provides a theoretical background for extracting intra-periodic frequent sequences. This leads to a new mining algorithm, Intra-Periodic Frequent Sequence Miner. Experimental results confirm its efficiency.
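
    For readers unfamiliar with the periodicity measure underlying PFPM-style mining, a common formulation is: the periods of a pattern are the gaps between its consecutive occurrence positions (plus the border gaps), and the pattern is periodic-frequent when it is frequent and its largest period stays under a maxPer threshold. The sketch below shows that measure; how the paper extends it across the elements of a sequence (intra-periodicity) is its contribution and is not modelled here.

    # Periods of a pattern's occurrences in a database of db_size transactions.
    def periods(occurrences, db_size):
        pos = [0] + sorted(occurrences) + [db_size]
        return [b - a for a, b in zip(pos, pos[1:])]

    def is_periodic_frequent(occurrences, db_size, minsup, maxper):
        return (len(occurrences) >= minsup
                and max(periods(occurrences, db_size)) <= maxper)

    occ = [2, 4, 6, 9]     # transactions 1..10 in which the pattern occurs
    print(periods(occ, 10), is_periodic_frequent(occ, 10, minsup=3, maxper=3))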